
Collaborating Authors

New York University


AI is already making online swindles easier. It could get much worse.

MIT Technology Review

Some cybersecurity researchers say it's too early to worry about AI-orchestrated cyberattacks. Others say it could already be happening. Anton Cherepanov is always on the lookout for something interesting. And in late August last year, he spotted just that.


Robot Talk Episode 121 – Adaptable robots for the home, with Lerrel Pinto

Robohub

Claire chatted to Lerrel Pinto from New York University about using machine learning to train robots to adapt to new environments. Lerrel Pinto is an Assistant Professor of Computer Science at New York University (NYU). His research is aimed at getting robots to generalize and adapt in the messy world we live in. His lab focuses broadly on robot learning and decision making, with an emphasis on large-scale learning (both data and models); representation learning for sensory data; developing algorithms to model actions and behaviour; reinforcement learning for adapting to new scenarios; and building open-source, affordable robots.


Human souls DO exist... and here's the proof according to four leading scientists

Daily Mail - Science & tech

Do our spirits live on after death? For most people, the question doesn't seem to require much soul-searching. A colossal 83 per cent of adults in the US believe that human souls exist, according to a 2023 survey by the Pew Research Center. Many religions believe that, when we die, our immortal souls survive or are reincarnated. While there has never been a scientific consensus, the debate is ongoing.


Satori: Towards Proactive AR Assistant with Belief-Desire-Intention User Modeling

Li, Chenyi, Wu, Guande, Chan, Gromit Yeuk-Yin, Turakhia, Dishita G, Quispe, Sonia Castelo, Li, Dong, Welch, Leslie, Silva, Claudio, Qian, Jing

arXiv.org Artificial Intelligence

Augmented Reality assistants are increasingly popular for supporting users with tasks like assembly and cooking. However, current systems typically provide reactive responses initiated by user requests, without considering rich contextual and user-specific information. To address this limitation, we propose a novel AR assistance system, Satori, that models both user states and environmental contexts to deliver proactive guidance. Our system combines the Belief-Desire-Intention (BDI) model with a state-of-the-art multi-modal large language model (LLM) to infer contextually appropriate guidance. The design is informed by two formative studies involving twelve experts. A sixteen-participant within-subjects study finds that Satori achieves performance comparable to a designer-created Wizard-of-Oz (WoZ) system without relying on manual configurations or heuristics, thereby enhancing generalizability and reusability and opening up new possibilities for AR assistance.
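The Belief-Desire-Intention pattern the abstract combines with an LLM can be caricatured in a few lines. This is a toy sketch, not Satori's implementation: the class, the goal names, the preconditions, and the guidance strings below are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class BDIUserModel:
    """Toy Belief-Desire-Intention state for a task-guidance assistant."""
    beliefs: dict = field(default_factory=dict)     # what the system thinks is true now
    desires: list = field(default_factory=list)     # (goal, precondition, guidance) by priority
    intentions: list = field(default_factory=list)  # guidance steps committed to so far

    def update_beliefs(self, observations):
        # Fold new observations (e.g. detected objects) into the belief state.
        self.beliefs.update(observations)

    def deliberate(self):
        # Commit to guidance for the highest-priority desire whose
        # preconditions hold in the current beliefs.
        for goal, precondition, step in self.desires:
            if all(self.beliefs.get(k) == v for k, v in precondition.items()):
                self.intentions.append(step)
                return step
        return None

# Hypothetical cooking scenario: guidance is offered proactively,
# without waiting for an explicit user request.
model = BDIUserModel()
model.desires = [
    ("boil_water", {"kettle_visible": True}, "Fill the kettle and switch it on."),
    ("chop_onion", {"knife_visible": True}, "Chop the onion into small pieces."),
]
model.update_beliefs({"knife_visible": True, "kettle_visible": False})
print(model.deliberate())  # -> Chop the onion into small pieces.
```

In a real system the belief updates and the precondition checks would come from the multi-modal LLM and scene perception rather than hand-written rules; the sketch only shows the deliberation structure.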


The First Entirely AI-Generated Video Game Is Insanely Weird and Fun

WIRED

Minecraft remains remarkably popular a decade or so after it was first released, thanks to a unique mix of quirky gameplay and open world building possibilities. A knock-off called Oasis, released last month, captures much of the original game's flavor with a remarkable and weird twist. The entire game is generated not by a game engine and hand-coded rules, but by an AI model that dreams up each frame. Oasis was built by an Israeli AI startup called Decart in collaboration with Etched, a company that designs custom silicon, to demonstrate the potential of hardware optimized to power transformer-based AI algorithms. Oasis uses a transformer AI model, similar to the one that powers a large language model but trained, apparently, on endless examples of people playing Minecraft, to dream up each new video frame in response to the previous one and to user input like clicks or mouse moves.
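The frame-by-frame loop described above, where each new frame is conditioned on the previous frame plus the user's input, can be sketched in miniature. The "model" here is a stand-in arithmetic rule, not Decart's transformer, and the input names and frame encoding are invented.

```python
def next_frame(prev_frame, user_input):
    """Stand-in for the generative model: maps (previous frame, input) -> new frame.
    Real Oasis runs a transformer here; this toy just shifts pixel values."""
    shift = {"left": -1, "right": 1}.get(user_input, 0)
    return [(p + shift) % 256 for p in prev_frame]

def play(initial_frame, inputs):
    """Autoregressive rollout: every click/keypress conditions the next frame."""
    frames = [initial_frame]
    for action in inputs:
        frames.append(next_frame(frames[-1], action))
    return frames

frames = play([10, 20, 30], ["right", "right", "left"])
# frames[-1] == [11, 21, 31]: each step depends only on the last frame and the input.
```

The point of the sketch is the control flow: there is no world state or game engine, only a function repeatedly predicting the next frame, which is also why such models can "forget" parts of the scene that leave the frame.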


AI models let robots carry out tasks in unfamiliar environments

MIT Technology Review

The team, consisting of researchers from New York University, Meta, and the robotics company Hello Robot, hopes its findings will make it quicker and easier to teach robots new skills while helping them function within previously unseen domains. The approach could make it easier and cheaper to deploy robots in our homes. "In the past, people have focused a lot on the problem of 'How do we get robots to do everything?' but not really asking 'How do we get robots to do the things that they do know how to do--everywhere?'" says Mahi Shafiullah, a PhD student at New York University who worked on the project. "We looked at 'How do you teach a robot to, say, open any door, anywhere?'" Teaching robots new skills generally requires a lot of data, which is pretty hard to come by.


Forthcoming machine learning and AI seminars: April 2024 edition

AIHub

This post contains a list of the AI-related seminars that are scheduled to take place between 9 April and 31 May 2024. All events detailed here are free and open for anyone to attend virtually.

Title to be confirmed
Speaker: Ananya Joshi (Carnegie Mellon University)
Organised by: Carnegie Mellon University
Zoom link is here.

Foundation Models for a Sustainable Planet
Speaker: Aditya Grover (UCLA)
Organised by: Cornell University
Zoom link is here.

Revolutionizing Public Safety: 6G Wireless Unleashes Precision Positioning and Navigation
Speaker: Eirini Tsiropoulou (University of New Mexico)
Organised by: The University of Texas at San Antonio
Zoom link is here.


Improvements & Evaluations on the MLCommons CloudMask Benchmark

Chennamsetti, Varshitha, Mehnaz, Laiba, Zhao, Dan, Ghosh, Banani, Samsonau, Sergey V.

arXiv.org Artificial Intelligence

In this paper, we report the performance benchmarking results of deep learning models on MLCommons' Science cloud-masking benchmark using a high-performance computing cluster at New York University (NYU): NYU Greene. MLCommons is a consortium that develops and maintains several scientific benchmarks that can benefit from developments in AI. We provide a description of the cloud-masking benchmark task, updated code, and the best model for this benchmark when using our selected hyperparameter settings. Our benchmarking results include the highest accuracy achieved on the NYU system as well as the average time taken for both training and inference on the benchmark across several runs/seeds. Our code can be found on GitHub. The MLCommons team has been kept informed about our progress and may use the developed code for their future work.


The Good Robot Podcast: featuring Meredith Broussard

AIHub

Hosted by Eleanor Drage and Kerry Mackereth, The Good Robot is a podcast which explores the many complex intersections between gender, feminism and technology. In this episode we talk to Meredith Broussard, data journalism professor at the Arthur L. Carter Journalism Institute at New York University. She's also the author of Artificial Unintelligence, which made waves following its release in 2018 by claiming that AI was nothing more than really fancy math. We talk about why we need to bring a little bit more friction back into technology, and about her latest book More Than a Glitch, which argues that AI that's not designed to be accessible is bad for everyone, in the same way that raised curbs between the pavement and the street, which you have to go down to cross the road, make urban outings difficult for lots of people, not just wheelchair users. Data journalist Meredith Broussard is an associate professor at the Arthur L. Carter Journalism Institute of New York University, research director at the NYU Alliance for Public Interest Technology, and the author of several books, including More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech and Artificial Unintelligence: How Computers Misunderstand the World.


VeriGen: A Large Language Model for Verilog Code Generation

Thakur, Shailja, Ahmad, Baleegh, Pearce, Hammond, Tan, Benjamin, Dolan-Gavitt, Brendan, Karri, Ramesh, Garg, Siddharth

arXiv.org Artificial Intelligence

In this study, we explore the capability of Large Language Models (LLMs) to automate hardware design by generating high-quality Verilog code, a common language for designing and modeling digital systems. We fine-tune pre-existing LLMs on Verilog datasets compiled from GitHub and Verilog textbooks. We evaluate the functional correctness of the generated Verilog code using a specially designed test suite, featuring a custom problem set and testing benches. Here, our fine-tuned open-source CodeGen-16B model outperforms the commercial state-of-the-art GPT-3.5-turbo model with a 1.1% overall increase. Upon testing with a more diverse and complex problem set, we find that the fine-tuned model shows competitive performance against the state-of-the-art GPT-3.5-turbo, excelling in certain scenarios. Notably, it demonstrates a 41% improvement in generating syntactically correct Verilog code across various problem categories compared to its pre-trained counterpart, highlighting the potential of smaller, in-house LLMs in hardware design automation.
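The paper's correctness percentages come from running generated Verilog against testing benches. One minimal way such per-sample pass/fail outcomes might be aggregated into a problem-level pass rate is sketched below; the problem names and outcomes are invented, and this is not the paper's actual evaluation harness.

```python
def pass_rate(results):
    """Fraction of problems solved by at least one generated sample.

    `results` maps a problem id to a list of per-sample testbench
    outcomes (True = the generated Verilog passed its testing bench)."""
    solved = sum(1 for outcomes in results.values() if any(outcomes))
    return solved / len(results)

# Hypothetical outcomes for three problems, five generated samples each.
outcomes = {
    "counter":     [False, True, False, False, False],
    "mux4to1":     [True, True, True, False, True],
    "fsm_traffic": [False, False, False, False, False],
}
print(pass_rate(outcomes))  # 2 of 3 problems have at least one passing sample
```

In practice each boolean would be produced by compiling the sample and running its testbench (e.g. with a Verilog simulator); the aggregation step itself is this simple.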